Abstract:
Real-time visualization of 3D GIS at whole-city scale faces the challenge of loading data dynamically and efficiently. Based on a multi-tier distributed 3D GIS framework, this paper presents a multi-level cache approach for dynamic data loading. It establishes, within the 3D GIS spatial database engine (3DGIS-SDE), a unified management mechanism for caches on three levels: the client memory cache (CMC), oriented to shared application use; the client file cache (CFC), organized by an index; and the application server memory cache (ASMC), kept structurally consistent. With the proposed optimized cache replacement policy, multi-level cache consistency maintenance, and multithreaded loading model, the engine adaptively makes full use of the caches at each level according to their application properties and coordinates them effectively. Finally, a practical 3D GIS database based on Oracle 11g is employed for testing. The experimental results show that this approach satisfies multi-user concurrent applications of 3D visual exploration.
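The abstract does not give the engine's internals, but the layered lookup it describes (memory cache over file cache, with a replacement policy) can be sketched. Below is a minimal two-level client cache in Python: an in-memory LRU stands in for the CMC and an indexed file cache for the CFC. All names (`TwoLevelCache`, `load_from_server`, etc.) are illustrative, not the paper's API.

```python
from collections import OrderedDict
import os
import pickle
import tempfile

class TwoLevelCache:
    """Minimal sketch: an in-memory LRU (stand-in for the CMC)
    backed by an indexed file cache (stand-in for the CFC)."""

    def __init__(self, capacity, cache_dir):
        self.capacity = capacity
        self.mem = OrderedDict()  # tile_id -> data, kept in LRU order
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, tile_id):
        return os.path.join(self.cache_dir, f"{tile_id}.bin")

    def get(self, tile_id, load_from_server):
        if tile_id in self.mem:            # level 1: memory hit
            self.mem.move_to_end(tile_id)
            return self.mem[tile_id]
        path = self._path(tile_id)
        if os.path.exists(path):           # level 2: file hit
            with open(path, "rb") as f:
                data = pickle.load(f)
        else:                              # miss at both levels: ask server tier
            data = load_from_server(tile_id)
            with open(path, "wb") as f:
                pickle.dump(data, f)
        self._admit(tile_id, data)
        return data

    def _admit(self, tile_id, data):
        self.mem[tile_id] = data
        self.mem.move_to_end(tile_id)
        if len(self.mem) > self.capacity:  # evict the least-recently-used tile
            self.mem.popitem(last=False)
```

On a miss at both levels the tile is fetched from the server tier and admitted into both caches; the real engine additionally maintains consistency with the server-side ASMC and loads tiles in multiple threads.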
Abstract:
Caching has long been employed in computer systems to improve performance at the expense of additional complexity in memory organisation and management. The coherence schemes developed for traditional large-scale systems (CC-NUMA) fail when applied to the vastness of today's mobile internet. A quality-of-service (QoS) approach is ideally suited to a general-purpose internet cache coherence protocol, providing strong consistency when needed while permitting weaker consistency for less critical data. An inexpensive, QoS-based solution to internet cache coherence is presented, and an experimental framework is outlined to verify the potential of the proposed scheme as a viable coherence solution for general internet applications.
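The abstract leaves the protocol unspecified, so the following Python sketch only illustrates the central idea: each cached object carries a QoS class, and the cache revalidates strongly-classed objects with the origin on every read while serving weakly-classed ones until a TTL expires. The `QoSCache` class, the `origin` interface, and the TTL value are all hypothetical.

```python
import time

# Hypothetical QoS classes: the paper's actual protocol is not specified,
# so this only illustrates per-object consistency strength.
STRONG, WEAK = "strong", "weak"

class QoSCache:
    def __init__(self, ttl_weak=30.0):
        self.store = {}          # key -> (value, version, fetch_time, qos)
        self.ttl_weak = ttl_weak

    def get(self, key, origin, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is not None:
            value, version, fetched, qos = entry
            if qos == WEAK and now - fetched < self.ttl_weak:
                return value                     # weak: serve possibly stale data
            if qos == STRONG and origin.version(key) == version:
                return value                     # strong: revalidate, then serve
        value, version, qos = origin.fetch(key)  # miss or invalid: refetch
        self.store[key] = (value, version, now, qos)
        return value
```

Strong-consistency reads pay one round trip to the origin for version validation; weak-consistency reads avoid that cost entirely within the TTL, which is the trade-off the abstract's QoS framing captures.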
Abstract:
As the computing power of high-performance computing (HPC) systems approaches exascale, storage systems are stretched to their limits to process the growing I/O traffic. Researchers are building storage systems on top of compute-node-local fast storage devices (such as NVMe SSDs) to alleviate the I/O bottleneck. However, user jobs have varying I/O bandwidth requirements; it is therefore a serious waste of expensive storage devices to place them on all compute nodes and build them into a global storage system. In addition, current node-local storage systems must cope with the challenging small-I/O and rank-0 I/O patterns of HPC workloads. In this paper, we present a workload-aware temporary cache (WatCache) to meet these challenges. We design a workload-aware node allocation method that assigns fast storage devices to jobs according to their I/O requirements and merges each job's devices into a separate temporary cache space. We implement a metadata caching strategy that reduces the metadata overhead of I/O requests to improve small-I/O performance. We design a data layout strategy that distributes consecutive data exceeding a threshold across multiple devices to achieve higher aggregate bandwidth for rank-0 I/O. Through extensive tests with several I/O benchmarks and applications, we validate that WatCache offers linearly scalable performance and brings significant performance improvements to small-I/O and rank-0 I/O patterns.
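The threshold-based layout strategy for rank-0 I/O can be sketched as follows. This is an illustrative Python stand-in, not WatCache's implementation: writes at or below the threshold stay on one device, while larger writes are striped round-robin so a single writer can draw on the aggregate bandwidth of several devices. All parameter names are assumptions.

```python
def layout_write(offset, length, devices, threshold, stripe_size):
    """Sketch of the threshold-based layout idea: small writes go to one
    device; a write larger than `threshold` is striped round-robin across
    all devices. Returns a list of (device, offset, length) placements."""
    if length <= threshold:
        return [(devices[0], offset, length)]
    placements = []
    pos = offset
    end = offset + length
    i = 0
    while pos < end:
        chunk = min(stripe_size, end - pos)
        placements.append((devices[i % len(devices)], pos, chunk))
        pos += chunk
        i += 1
    return placements
```

Keeping small writes on a single device avoids paying striping overhead on the small-I/O pattern, while a large rank-0 write fans out to every device in the job's temporary cache space.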
Abstract:
The emergence and rapid spread of cloud computing as an accessible, expandable, on-demand computing facility has a deep affinity with the proliferation of intelligent mobile devices, including smartphones and tablets. Together, these technologies have the potential to leave nobody behind when it comes to computing applications, whether small and personal or large and organizational, regardless of geographic boundaries and economic conditions. However, many technical challenges still delay the realization of this vision with the responsiveness and quality users need. In this paper, we examine user requirements for access to the cloud through thin clients, handheld, and mobile devices. In light of these requirements we characterize some of the needed research developments, particularly in the area of device architecture. We present our work in exploring the cache design space for embedded processors using evolutionary techniques for mobile and thin-client processors. We present a heuristic, evolutionary approach (a genetic algorithm) to exploration that significantly cuts down on time and resources, obtaining a near-optimal design. We demonstrate the real-world utility of our tool chain "CERE" (pronounced SIRI), short for CachE Recommendation Engine, by rapidly and efficiently designing a cache hierarchy that maximizes the performance of a web browser navigating to a set of popular websites on a single ARM core. The goal is to improve the user experience with web browsers. "CERE" made the right choices, and we observed a 17.1% speedup of the "best" hierarchy relative to the "worst" hierarchy. We also outline potential future directions.
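CERE's chromosome encoding and cache simulator are not described in the abstract, so the sketch below is a toy genetic algorithm over a (capacity, line size, associativity) design space, with a synthetic fitness function standing in for simulation; every constant and formula here is illustrative, not CERE's.

```python
import random

SIZES  = [4, 8, 16, 32, 64, 128]   # KiB, illustrative design space
LINES  = [16, 32, 64, 128]         # bytes
ASSOCS = [1, 2, 4, 8]

def fitness(cfg):
    """Stand-in for a cache simulator: rewards capacity and associativity,
    penalizes area/energy cost (purely synthetic numbers)."""
    size, line, assoc = cfg
    perf = 10 * (size ** 0.5) + 3 * assoc + 0.05 * line
    cost = 0.08 * size + 0.5 * assoc
    return perf - cost

def mutate(cfg, rate=0.2):
    s, l, a = cfg
    if random.random() < rate: s = random.choice(SIZES)
    if random.random() < rate: l = random.choice(LINES)
    if random.random() < rate: a = random.choice(ASSOCS)
    return (s, l, a)

def crossover(p1, p2):
    # uniform crossover: each gene taken from either parent
    return tuple(random.choice(pair) for pair in zip(p1, p2))

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [(random.choice(SIZES), random.choice(LINES), random.choice(ASSOCS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection, elites kept
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Because the top half of the population survives each generation, the best configuration found never regresses, and the search visits only a fraction of the full design space, which is the time-and-resource saving the abstract attributes to the evolutionary approach.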
Abstract:
Proxy caching has been used to speed up Web browsing and reduce networking costs. In this paper, we study the extension of proxy caching techniques to streaming video applications. A trivial extension consists of storing complete video sequences in the cache. However, this may not be applicable in situations where the video objects are very large and proxy cache space is limited. We show that the approaches proposed in this paper (referred to as selective caching), where only a few frames are cached, can also contribute to significant improvements in the overall performance. In particular, we discuss two network environments for streaming video, namely, quality-of-service (QoS) networks and best-effort networks (Internet). For QoS networks, the video caching goal is to reduce the network bandwidth costs; for best-effort networks, the goal is to increase the robustness of continuous playback against poor network conditions (such as congestion, delay, and loss). Two different selective caching algorithms (SCQ and SCB) are proposed, one for each network scenario, to increase the relevant overall performance metric in each case, while requiring only a fraction of the video stream to be cached. The main contribution of our work is to provide algorithms that are efficient even when the buffer memory available at the client is limited. These algorithms are also scalable so that when changes in the environment occur it is possible, with low complexity, to modify the allocation of cache space to different video sequences.
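The actual SCQ and SCB algorithms select frames based on the network scenario and client buffer occupancy; the Python sketch below is a deliberately simplified stand-in that just caches the largest frames first within a byte budget, illustrating how caching a small fraction of a stream can remove its bandwidth peaks. Function names and the selection heuristic are assumptions, not the paper's algorithms.

```python
def select_frames(frame_sizes, cache_budget):
    """Simplified stand-in for selective caching (not the actual SCQ/SCB
    algorithms): cache the largest frames first, since removing bitrate
    peaks yields the biggest drop in the bandwidth the network must
    deliver during playback. Returns the set of cached frame indices."""
    order = sorted(range(len(frame_sizes)),
                   key=lambda i: frame_sizes[i], reverse=True)
    cached, used = set(), 0
    for i in order:
        if used + frame_sizes[i] <= cache_budget:
            cached.add(i)
            used += frame_sizes[i]
    return cached

def network_bytes(frame_sizes, cached):
    # bytes the origin server must still stream for uncached frames
    return sum(s for i, s in enumerate(frame_sizes) if i not in cached)
```

The real algorithms additionally account for the client's limited buffer and adapt the per-sequence cache allocation with low complexity as conditions change, which this greedy sketch ignores.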